AIbase

# Translation Quality Scoring

**M-Prometheus-14B** (Unbabel · license: Other)
M-Prometheus is an open-source LLM evaluator (LLM-as-a-judge) that natively supports multilingual output assessment. It was trained on 480,000 multilingual direct-assessment and pairwise-comparison examples with long-form feedback.
Tags: Large Language Model · Transformers
**M-Prometheus-7B** (Unbabel · license: Other)
M-Prometheus is an open-source LLM evaluator that natively supports multilingual output assessment. It was trained on 480,000 multilingual direct-assessment and pairwise-comparison examples with long-form feedback.
Tags: Large Language Model · Transformers
**M-Prometheus-3B** (Unbabel · license: Other)
M-Prometheus is an open-source LLM evaluator that natively supports multilingual output assessment. It was trained on 480,000 multilingual direct-assessment and pairwise-comparison examples with long-form feedback.
Tags: Large Language Model · Transformers
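Prometheus-style judges are prompted with an instruction, the response to grade, and a score rubric, and they return written feedback plus an integer score. As a minimal sketch of the direct-assessment mode described above (the exact template wording here is an assumption, not the released M-Prometheus prompt format):

```python
# Sketch: build a Prometheus-style direct-assessment prompt.
# The template below is illustrative; the official M-Prometheus
# prompt format may differ (assumption, not the released template).

def build_direct_assessment_prompt(instruction: str,
                                   response: str,
                                   rubric: str) -> str:
    """Assemble a single judge prompt asking for feedback and a 1-5 score."""
    return (
        "###Task Description:\n"
        "An instruction, a response to evaluate, and a score rubric are given.\n"
        "1. Write detailed feedback assessing the response strictly by the rubric.\n"
        "2. After the feedback, give an integer score between 1 and 5.\n"
        "3. End your answer with: [RESULT] <score>\n\n"
        f"###Instruction:\n{instruction}\n\n"
        f"###Response:\n{response}\n\n"
        f"###Score Rubric:\n{rubric}\n"
    )

prompt = build_direct_assessment_prompt(
    instruction="Translate to German: 'The weather is nice today.'",
    response="Das Wetter ist heute schön.",
    rubric="Is the translation accurate, fluent, and complete?",
)
```

The same judge can be used for pairwise comparison by placing two responses in the prompt and asking which one better satisfies the rubric.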
**UniTE-MUP** (ywan · license: Apache-2.0)
UniTE (Unified Translation Evaluation) is a unified framework for translation quality evaluation; the MUP variant supports multilingual translation assessment.
Tags: Machine Translation · Transformers
**UniTE-UP** (ywan · license: Apache-2.0)
UniTE is a unified framework for translation quality evaluation; the UP variant is optimized for English-target translation tasks.
Tags: Machine Translation · Transformers
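The "unified" in UniTE refers to one model scoring a translation hypothesis against the source sentence, against a reference translation, or against both, depending on which inputs are supplied. A rough sketch of composing those three input modes, using a placeholder separator token (an assumption; the released UniTE code handles tokenization internally):

```python
# Sketch of UniTE's three evaluation modes: source-only (quality
# estimation), reference-only, and source+reference. The "[SEP]"
# marker is a placeholder (assumption), not UniTE's actual tokenizer output.
from typing import Optional

def compose_unite_input(hypothesis: str,
                        source: Optional[str] = None,
                        reference: Optional[str] = None) -> str:
    """Concatenate the hypothesis with whichever comparison texts exist."""
    if source is None and reference is None:
        raise ValueError("Need a source, a reference, or both.")
    segments = [hypothesis]          # hypothesis always comes first
    if source is not None:
        segments.append(source)      # source-only mode (quality estimation)
    if reference is not None:
        segments.append(reference)   # reference-based mode
    return " [SEP] ".join(segments)

# The three modes from the same function:
src_only = compose_unite_input("Das Wetter ist heute schön.",
                               source="The weather is nice today.")
ref_only = compose_unite_input("Das Wetter ist heute schön.",
                               reference="Das Wetter ist schön heute.")
both = compose_unite_input("Das Wetter ist heute schön.",
                           source="The weather is nice today.",
                           reference="Das Wetter ist schön heute.")
```

A single encoder scoring all three input layouts is what lets UniTE serve as both a reference-based metric and a reference-free quality-estimation model.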
© 2025 AIbase